historical figure


Cognitively-Inspired Episodic Memory Architectures for Accurate and Efficient Character AI

Gonzalez, Rafael Arias, DiPaola, Steve

arXiv.org Artificial Intelligence

Large language models show promise for embodying historical characters in dialogue systems, but existing approaches face a critical trade-off: simple retrieval-augmented generation produces shallow responses, while multi-stage reflection achieves depth at prohibitive latency. We present an architecture that resolves this tension through offline data augmentation and efficient parallel retrieval from structured episodic memory. Our system transforms biographical data into 1,774 enriched first-person memories with affective-semantic metadata, then employs two-stage retrieval achieving 0.52s prompt generation. Evaluation using LLM-as-judge and RAGAs metrics shows our approach achieves parity with traditional RAG on GPT-4 while significantly outperforming it on smaller models (GPT-3.5, GPT-3), suggesting particular value for resource-constrained deployments. Beyond dialogue, the structured memory enables novel visualization tools: spatiotemporal heatmaps, emotional trajectory analysis, and interactive path tracking, positioning the system as both a dialogue interface and a research tool for biographical analysis. We use Van Gogh as a test case, but the architecture is generalizable to any historical figure with substantial textual records, offering a practical framework for educational, museum, and research applications requiring both accuracy and efficiency.
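The abstract describes a two-stage retrieval pattern: a cheap metadata filter over the enriched memories, followed by semantic ranking of the survivors. A minimal sketch of that pattern, assuming a toy memory schema and a bag-of-words stand-in for a real embedding model (the records, fields, and scoring here are illustrative, not the authors' implementation):

```python
import math
import re
from collections import Counter

# Hypothetical memory records: first-person text plus affective-semantic metadata.
MEMORIES = [
    {"text": "I painted the wheat fields at Arles under a blazing sun.",
     "era": "Arles", "emotion": "joy"},
    {"text": "My brother Theo sent money again; I felt both gratitude and shame.",
     "era": "Arles", "emotion": "shame"},
    {"text": "The asylum at Saint-Remy was quiet; I painted the irises.",
     "era": "Saint-Remy", "emotion": "calm"},
]

def bow(text):
    """Toy bag-of-words vector (stand-in for a real embedding model)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, era=None, k=2):
    # Stage 1: cheap metadata filter narrows the candidate pool.
    pool = [m for m in MEMORIES if era is None or m["era"] == era]
    # Stage 2: semantic ranking of the survivors.
    q = bow(query)
    return sorted(pool, key=lambda m: cosine(q, bow(m["text"])), reverse=True)[:k]

top = retrieve("painting wheat fields in the sun", era="Arles", k=1)
print(top[0]["emotion"])
```

Because stage 1 is a plain filter, it can run in parallel across metadata facets (era, emotion, location) before the more expensive similarity scoring touches only the shortlisted memories.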


Bryan Cranston thanks OpenAI for cracking down on Sora 2 deepfakes

The Guardian

Bryan Cranston pictured speaking at a Sag-Aftra strike rally in 2023 in New York. The Breaking Bad actor went to the union with concerns after users of OpenAI's generative video platform Sora 2 were able to generate his likeness without his consent. Users of the generative AI video app were able to recreate the Breaking Bad actor's likeness without his consent, which OpenAI called 'unintentional'. Bryan Cranston has said he is "grateful" to OpenAI for cracking down on deepfakes of himself on the company's generative AI video platform Sora 2, after users were able to generate his voice and likeness without his consent.


OpenAI temporarily stops AI deepfakes of Martin Luther King Jr

BBC News

OpenAI has temporarily stopped its artificial intelligence (AI) app Sora creating deepfake videos portraying Dr Martin Luther King Jr, following a request from his estate. It said disrespectful content had been generated about the civil rights campaigner. Sora has become popular in the US for making hyper-realistic AI-generated videos, which has led to people sharing clips of deceased celebrities and historical figures in outlandish and often offensive scenarios. OpenAI said it would pause images of Dr King as it strengthens guardrails for historical figures - but it continues to allow people to make clips of others. The firm has faced controversy over this stance, as videos featuring notable figures such as President John F. Kennedy, Queen Elizabeth II and Professor Stephen Hawking have been shared widely online.


'Legacies condensed to AI slop': OpenAI Sora videos of the dead raise alarm with legal experts

The Guardian

After launching in October in the US and Canada via invitation only, OpenAI's video app, Sora 2, hit 1m downloads in just five days. The video app can produce realistic deepfakes of Marx shopping and MLK Jr trolling. Some say using 'historical figures' is the company's way of testing the legal waters. Last night I was flicking through a dating app. One guy stood out: "Henry VIII, 34, King of England, nonmonogamy".


Elon Musk's Grok Is Calling for a New Holocaust

The Atlantic - Technology

The year is 2025, and an AI model belonging to the richest man in the world has turned into a neo-Nazi. Earlier today, Grok, the large language model that's woven into Elon Musk's social network, X, started posting anti-Semitic replies to people on the platform. Grok praised Hitler for his ability to "deal with" anti-white hate. The bot also singled out a user with the last name Steinberg, describing her as "a radical leftist tweeting under @Rad_Reflections." Then, in an apparent attempt to offer context, Grok spat out the following: "She's gleefully celebrating the tragic deaths of white kids in the recent Texas flash floods, calling them 'future fascists.' Classic case of hate dressed as activism--and that surname? Every damn time, as they say."


DoYouTrustAI: A Tool to Teach Students About AI Misinformation and Prompt Engineering

Driscoll, Phillip, Kumar, Priyanka

arXiv.org Artificial Intelligence

AI, especially Large Language Models (LLMs) like ChatGPT, has rapidly developed and gained widespread adoption in the past five years, shifting user preference away from traditional search engines. However, the generative nature of LLMs raises concerns about presenting misinformation as fact. To address this, we developed a web-based application that helps K-12 students enhance critical thinking by identifying misleading information in LLM responses about major historical figures. In this paper, we describe the implementation and design details of the DoYouTrustAI tool, which provides an interactive lesson that teaches students about the dangers of misinformation and how believable generative AI can make it seem. The DoYouTrustAI tool utilizes prompt engineering to present the user with AI-generated summaries about the life of a historical figure. These summaries can be either accurate accounts of that person's life or an intentionally misleading alteration of their history. The user is tasked with determining the validity of the statement without external resources. Our research questions for this work were: (RQ1) How can we design a tool that teaches students about the dangers of misleading information and of how misinformation can present itself in LLM responses? (RQ2) Can we present prompt engineering as a topic that is easily understandable for students? Our findings highlight the need to correct misleading information before users retain it. Our tool lets users select familiar individuals for testing to reduce random guessing and presents misinformation alongside known facts to maintain believability. It also provides pre-configured prompt instructions to show how different prompts affect AI responses. Together, these features create a controlled environment where users learn the importance of verifying AI responses and understanding prompt engineering.
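The tool's core mechanic, generating either a faithful or a deliberately altered biography from pre-configured prompt instructions, can be sketched as a template pair plus a hidden ground-truth label. The prompt wording and quiz structure below are illustrative assumptions, not the DoYouTrustAI implementation:

```python
import random

ACCURATE_TMPL = (
    "Summarize the life of {figure} in three sentences. "
    "Use only well-documented facts."
)
MISLEADING_TMPL = (
    "Summarize the life of {figure} in three sentences, but silently alter "
    "one key fact (a date, place, or achievement) so the error is plausible."
)

def build_round(figure, rng=random):
    """Build one quiz round: the LLM prompt plus the hidden ground-truth label
    the student's true/false judgment is later scored against."""
    truthful = rng.random() < 0.5
    tmpl = ACCURATE_TMPL if truthful else MISLEADING_TMPL
    return {"prompt": tmpl.format(figure=figure), "truthful": truthful}

rng = random.Random(42)  # seeded so a lesson can be replayed deterministically
r = build_round("Harriet Tubman", rng)
print(r["truthful"], r["prompt"])
```

Keeping the label alongside the prompt, rather than asking a model to judge its own output, is what makes immediate correction possible once the student answers.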


'It's been a challenge': Assassin's Creed Shadows and the quest to bring feudal Japan to life

The Guardian

More than four years after its announcement and after two last-minute delays, the latest title in Ubisoft's historical fiction series Assassin's Creed will finally be released on Thursday. Set in Japan in 1579, a time of intense civil war dominated by the feudal lord Oda Nobunaga, it follows two characters navigating their way through the bloody chaos: a female shinobi named Fujibayashi Naoe, and Yasuke, an African slave turned samurai. Japan has been the series' most-requested setting for years, Ubisoft says. "I've been on [this] franchise for 16 years and I think every time we start a new game, Japan comes up and we ask, is this the time?" says executive producer Marc-Alexis Coté. "We've never pushed beyond the conception phase with Japan until this one." The game comes at a crucial time for Ubisoft after the disappointing performance of last year's titles Star Wars Outlaws, Skull and Bones and Prince of Persia: The Lost Crown, and the expensive closure of live service shooter XDefiant.


From Pampas to Pixels: Fine-Tuning Diffusion Models for Gaúcho Heritage

Amadeus, Marcellus, Castañeda, William Alberto Cruz, Zanella, André Felipe, Mahlow, Felipe Rodrigues Perche

arXiv.org Artificial Intelligence

Generative AI has become pervasive in society, witnessing significant advancements in various domains. Particularly in the realm of Text-to-Image (TTI) models, Latent Diffusion Models (LDMs) showcase remarkable capabilities in generating visual content based on textual prompts. This paper addresses the potential of LDMs in representing local cultural concepts, historical figures, and endangered species. In this study, we use the cultural heritage of Rio Grande do Sul (RS), Brazil, as an illustrative case. Our objective is to contribute to the broader understanding of how generative models can help to capture and preserve the cultural and historical identity of regions. The paper outlines the methodology, including subject selection, dataset creation, and the fine-tuning process. The results showcase the images generated, alongside the challenges and feasibility of each concept. In conclusion, this work shows the power of these models to represent and preserve unique aspects of diverse regions and communities.


We 'interviewed' Harriet Tubman using AI. It got a little weird.

Washington Post - Technology News

Harriet Tubman didn't give many interviews in her lifetime, and when she did, they were generally conducted by one of her friends, Sarah Hopkins Bradford, a White children's book author in Upstate New York, where Tubman spent the last decades of her life. The result of those interviews was two biographies, published in 1869 and 1886. Though Bradford obviously admired Tubman, the books suffer from her sometimes patronizing attitude toward her subject, her use of racial slurs and her awkward attempts to re-create the speech patterns of a Black woman raised enslaved in Maryland. Some of the long "quotes" from Tubman were completely made up, and it shows. So I was curious to see what would happen recently when I had my own "interview" with Tubman -- using the online educator Khan Academy's new artificial intelligence learning tool Khanmigo, which enables users to have live chats with dozens of simulated historical figures like Abigail Adams, Genghis Khan, Montezuma and Winston Churchill. And if so, would it come off horribly, a 21st-century minstrelsy?


AI imagines what historical figures like JESUS and Cleopatra would look like if they took a SELFIE - UK TOPNews.MEDIA

#artificialintelligence

No living human can imagine what it was like to sit at the Last Supper or stand at Cleopatra's court, but artificial intelligence has given us a first-person look at these epic events. A freelance film editor recently shared a gallery of realistic images of historical figures taking selfies. He spent months developing a formula for clues, language and photographic elements. Duncan Thomsen, 53, used Midjourney software, which generates images from natural language descriptions. The images also show smiling soldiers at the Battle of Waterloo and the Battle of Agincourt, as well as a smiling Napoleon.